28 research outputs found

    What do citizens communicate about during crises? Analyzing twitter use during the 2011 UK riots

    Abstract The use of social media during crises has been explored in a variety of natural and man-made crisis situations. Yet, most of these studies have focused exclusively on the communication strategies and messages sent by crisis responders. Surprisingly little research has been done on how crisis publics (i.e., those people interested in or affected by the crisis) use social media during such events. Our article addresses this gap in the context of citizens' Twitter use during the 2011 riots in the UK. Focusing on communications with and about police forces in two cities, we analyzed 5984 citizen tweets collected during the event for content and sentiment. Comparing the two cases, our findings suggest that citizens' Twitter communication follows a general logic of concerns, but can also be influenced by single, non-crisis-related events such as perceived missteps in a police force's Twitter communication. Our study provides insights into citizens' concerns and communication patterns during crises, adding to our knowledge about the dynamics of citizens' use of social media in such times. It further highlights the fragmentation in Twitter audiences, especially in later stages of the crisis. These observations can be utilized by police forces to help determine the appropriate organizational responses that facilitate coping across various stages of crisis events. In addition, they illustrate limitations in current theoretical understandings of crisis response strategies, adding the requirement for adaptivity, flexibility, and ambiguity in organizational responses to address the observed plurivocality of crisis audiences.

    Detecting Dysfluencies in Stuttering Therapy Using wav2vec 2.0

    Stuttering is a varied speech disorder that harms an individual's communication ability. Persons who stutter (PWS) often use speech therapy to cope with their condition. Improving speech recognition systems for people with such non-typical speech, or tracking the effectiveness of speech therapy, would require systems that can detect dysfluencies while at the same time being able to detect speech techniques acquired in therapy. This paper shows that fine-tuning wav2vec 2.0 [1] for the classification of stuttering on a sizeable English corpus containing stuttered speech, in conjunction with multi-task learning, boosts the effectiveness of the general-purpose wav2vec 2.0 features for detecting stuttering in speech, both within and across languages. We evaluate our method on FluencyBank [2] and the German therapy-centric Kassel State of Fluency (KSoF) [3] dataset by training Support Vector Machine classifiers using features extracted from the fine-tuned models for six different stuttering-related event types: blocks, prolongations, sound repetitions, word repetitions, interjections, and - specific to therapy - speech modifications. Using embeddings from the fine-tuned models leads to relative classification performance gains of up to 27% w.r.t. F1-score. Comment: Accepted at Interspeech 202
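    The evaluation pipeline described in the abstract (per-event-type SVM classifiers trained on embeddings from a fine-tuned wav2vec 2.0 model) can be sketched as follows. This is a minimal, hypothetical illustration: random vectors stand in for the real 768-dimensional wav2vec 2.0 embeddings, and the labels are synthetic; only the SVM-on-embeddings step mirrors the described setup.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import f1_score

    # Stand-in features: in the paper these would be embeddings extracted
    # from a fine-tuned wav2vec 2.0 model (one vector per 3 s audio clip).
    rng = np.random.default_rng(0)
    n_clips, emb_dim = 400, 768
    X = rng.normal(size=(n_clips, emb_dim))
    # Synthetic binary labels for one event type (e.g. "block" vs. not).
    y = rng.integers(0, 2, size=n_clips)

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0, stratify=y)

    # One SVM per stuttering-related event type, as in the described setup.
    clf = SVC(kernel="rbf", C=1.0).fit(X_train, y_train)
    pred = clf.predict(X_test)
    print("F1:", round(f1_score(y_test, pred), 3))
    ```

    In practice, one such classifier would be trained per event type (blocks, prolongations, sound repetitions, word repetitions, interjections, speech modifications), and F1-scores compared between embeddings from the generic and the fine-tuned model.
    
    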

    Best practice in police social media adaptation

    Summary: Best Practice in Police Social Media Adaptation. This document describes best practices of European police forces in adopting social media. The description of these practices stems from a workshop series and other events where police ICT experts met with academics and industry experts, and from a study of the Twitter usage of British police forces during the 2011 riots. Grouped in nine categories, we describe different uses and implementation strategies of social media by police forces. Based on these examples, we show that there have been numerous ways in which police forces benefitted from adopting social media, ranging from improved information for investigations and an improved relationship with the public to a more efficient use of resources.

    A Stutter Seldom Comes Alone -- Cross-Corpus Stuttering Detection as a Multi-label Problem

    Most stuttering detection and classification research has viewed stuttering as a multi-class classification problem or a binary detection task for each dysfluency type; however, this does not match the nature of stuttering, in which one dysfluency seldom comes alone but rather co-occurs with others. This paper explores multi-language and cross-corpus end-to-end stuttering detection as a multi-label problem using a modified wav2vec 2.0 system with an attention-based classification head and multi-task learning. We evaluate the method using combinations of three datasets containing English and German stuttered speech, one containing speech modified by fluency shaping. The experimental results and an error analysis show that multi-label stuttering detection systems trained on cross-corpus and multi-language data achieve competitive results, but performance on samples with multiple labels stays below overall detection results. Comment: Accepted for presentation at Interspeech 2023. arXiv admin note: substantial text overlap with arXiv:2210.1598
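    The multi-label framing above (each clip may carry several dysfluency labels at once, rather than exactly one class) can be illustrated with a small sketch. This is an assumption-laden toy example: the paper uses a modified wav2vec 2.0 system with an attention-based head, whereas here a simple one-vs-rest linear model over random stand-in features shows only the multi-label target structure and evaluation.

    ```python
    import numpy as np
    from sklearn.multiclass import OneVsRestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import f1_score

    rng = np.random.default_rng(1)
    n_clips, emb_dim, n_types = 300, 64, 5  # e.g. 5 dysfluency types
    X = rng.normal(size=(n_clips, emb_dim))
    # Multi-label targets: each row is a binary indicator vector, so a clip
    # can be, say, both a "block" and a "sound repetition" simultaneously.
    Y = rng.integers(0, 2, size=(n_clips, n_types))

    clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
    pred = clf.predict(X)
    print("macro-F1:", round(f1_score(Y, pred, average="macro"), 3))
    ```

    The key contrast with a multi-class setup is that the target is a matrix of co-occurring labels instead of a single class index per clip, which is what lets the evaluation separate performance on single-label vs. multi-label samples.
    
    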

    Multi-class Detection of Pathological Speech with Latent Features: How does it perform on unseen data?

    The detection of pathologies from speech features is usually defined as a binary classification task with one class representing a specific pathology and the other class representing healthy speech. In this work, we train neural networks, large margin classifiers, and tree boosting machines to distinguish between four different pathologies: Parkinson's disease, laryngeal cancer, cleft lip and palate, and oral squamous cell carcinoma. We demonstrate that latent representations extracted at different layers of a pre-trained wav2vec 2.0 system can be effectively used to classify these types of pathological voices. We evaluate the robustness of our classifiers by adding room impulse responses to the test data and by applying them to unseen speech corpora. Our approach achieves unweighted average F1-scores between 74.1% and 96.4%, depending on the model and the noise conditions used. The systems generalize and perform well on unseen data of healthy speakers sampled from a variety of different sources. Comment: Submitted to ICASSP 202
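    The robustness test mentioned above, "adding room impulse responses to the test data", amounts to convolving each test utterance with a room impulse response (RIR). A minimal sketch of that augmentation step, using a synthetic exponentially decaying noise burst in place of a measured RIR (the paper's actual RIRs and tooling are not specified here):

    ```python
    import numpy as np
    from scipy.signal import fftconvolve

    rng = np.random.default_rng(2)
    sr = 16000
    speech = rng.normal(size=sr)  # 1 s of stand-in "speech"

    # Synthetic RIR: white noise shaped by an exponential decay (~0.3 s),
    # a common stand-in when no measured impulse response is available.
    t = np.arange(int(0.3 * sr)) / sr
    rir = rng.normal(size=t.size) * np.exp(-t / 0.05)
    rir /= np.max(np.abs(rir))

    # Convolve and trim back to the original length.
    reverberant = fftconvolve(speech, rir, mode="full")[:speech.size]
    # Rescale so the augmented clip keeps a comparable peak level.
    reverberant *= np.max(np.abs(speech)) / np.max(np.abs(reverberant))
    print(reverberant.shape)
    ```

    Feature extraction and classification then run on `reverberant` exactly as on the clean signal, so the comparison isolates the effect of the simulated room acoustics.
    
    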

    Classifying Dementia in the Presence of Depression: A Cross-Corpus Study

    Automated dementia screening enables early detection and intervention, reducing costs to healthcare systems and increasing quality of life for those affected. Depression shares symptoms with dementia, adding complexity to diagnosis. The research focus so far has been on binary classification of dementia (DEM) and healthy controls (HC) using speech from picture description tests from a single dataset. In this work, we apply established baseline systems to discriminate cognitive impairment in speech from the semantic Verbal Fluency Test and the Boston Naming Test using text, audio and emotion embeddings in a 3-class classification problem (HC vs. MCI vs. DEM). We perform cross-corpus and mixed-corpus experiments on two independently recorded German datasets to investigate generalization to larger populations and different recording conditions. In a detailed error analysis, we look at depression as a secondary diagnosis to understand what our classifiers actually learn. Comment: Accepted at INTERSPEECH 202
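    The cross-corpus protocol described above (train on one independently recorded dataset, evaluate on the other, in a 3-class HC vs. MCI vs. DEM problem) can be sketched schematically. All data here is synthetic and the classifier is a placeholder; only the train-on-A / test-on-B structure reflects the described experiment.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(3)
    dim, n_classes = 32, 3  # classes: HC=0, MCI=1, DEM=2

    # Two stand-in corpora; corpus B has a shifted feature distribution to
    # mimic different recording conditions and populations.
    X_a = rng.normal(size=(240, dim))
    y_a = rng.integers(0, n_classes, size=240)
    X_b = rng.normal(loc=0.3, size=(120, dim))
    y_b = rng.integers(0, n_classes, size=120)

    # Cross-corpus: fit on corpus A only, evaluate on unseen corpus B.
    clf = LogisticRegression(max_iter=1000).fit(X_a, y_a)
    acc = accuracy_score(y_b, clf.predict(X_b))
    print("cross-corpus accuracy:", round(acc, 3))
    ```

    The mixed-corpus condition would instead pool both corpora before splitting into train and test, which is why comparing the two conditions reveals how much performance depends on corpus-specific recording characteristics.
    
    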